AI21 Labs
Human or Not? A Gamified Approach to the Turing Test
Daniel Jannai, Amos Meron, Barak Lenz, Yoav Levine, Yoav Shoham
"I believe that in 50 years' time it will be possible to make computers play the imitation game so well that an average interrogator will have no more than 70% chance of making the right identification after 5 minutes of questioning." This prediction, made by Alan Turing in 1950, is the motivation behind "Human or Not?", an online game inspired by the Turing test. Over the course of a month, the game was played by more than 1.5 million users, who engaged in anonymous two-minute chat sessions with either another human or an AI language model that was prompted to behave like a human. The players' task was to guess correctly whether they had spoken to a person or to an AI. This largest-scale Turing-style test conducted to date revealed some interesting facts. For example, overall, users guessed the identity of their partners correctly in only 68% of the games. In the subset of games in which users faced an AI bot, the correct-guess rate was even lower, at 60% (that is, not much higher than chance). While this experiment calls for many extensions and refinements, these findings already begin to shed light on a near future that will inevitably commingle humans and AI.

The famous Turing test, originally proposed by Alan Turing in 1950 as "the imitation game" (Turing, 1950), was conceived as an operational test of intelligence: it tests a machine's ability to exhibit behavior indistinguishable from that of a human. In the proposed test, a human evaluator engages in a natural-language conversation with both another human and a machine and tries to distinguish between them. If the evaluator is unable to tell which is which, the machine is said to have passed the test.
- Asia > Russia (0.14)
- North America > United States > Hawaii > Honolulu County > Honolulu (0.05)
- Europe > Ukraine (0.04)
- (8 more...)
- Research Report (1.00)
- Personal (0.88)
- Government (0.93)
- Health & Medicine (0.93)
- Media (0.68)
- Leisure & Entertainment > Games > Computer Games (0.46)
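The reported accuracies (68% overall, 60% against AI bots) can be sanity-checked against chance with a quick normal-approximation sketch. The per-condition game count below is an assumption for illustration; the article reports only that over 1.5 million users played:

```python
import math

def z_vs_chance(correct_rate, n, p0=0.5):
    """Normal-approximation z-score of an observed accuracy against chance."""
    se = math.sqrt(p0 * (1 - p0) / n)  # standard error under the null
    return (correct_rate - p0) / se

# Hypothetical sample size: with millions of games played overall,
# even a conservative 10,000-game subset is assumed here.
print(round(z_vs_chance(0.60, 10_000), 1))  # 20.0
```

At that scale, even 60% accuracy sits far outside what chance guessing would produce; "not much higher than chance" describes the effect size, not statistical insignificance.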
Amazon Bedrock: New Suite of Generative AI Tools Unveiled by AWS
AWS has entered the red-hot realm of generative AI with the introduction of a suite of generative AI development tools. The cornerstone of these is Amazon Bedrock, a tool for building generative AI applications using pre-trained foundation models accessible via an API through AI startups like AI21 Labs, Anthropic, and Stability AI, as well as Amazon's own Titan family of foundation models (FMs). Bedrock offers serverless integration with AWS tools and capabilities, enabling customers to find the right model for their needs, customize it with their data, and deploy it without managing costly infrastructure. Amazon states that the infrastructure supporting the Bedrock service will employ a mix of Amazon's proprietary AI chips (AWS Trainium and AWS Inferentia) and GPUs from Nvidia. AWS is positioning Bedrock as a way to democratize FMs, as training these large models can be prohibitively expensive for many companies.
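Invoking a foundation model through such an API reduces to posting a JSON body to a model ID. A minimal sketch of what a Bedrock-style request might look like; the model ID and body field names follow the later-published bedrock-runtime interface and are assumptions here:

```python
import json

def build_invoke_request(prompt, max_tokens=200, temperature=0.7):
    # Hypothetical request for an AI21 model via the bedrock-runtime
    # InvokeModel API; model ID and body schema are assumptions.
    return {
        "modelId": "ai21.j2-mid-v1",
        "contentType": "application/json",
        "accept": "application/json",
        "body": json.dumps({
            "prompt": prompt,
            "maxTokens": max_tokens,
            "temperature": temperature,
        }),
    }

req = build_invoke_request("Summarize the launch of Amazon Bedrock.")
print(json.loads(req["body"])["maxTokens"])  # 200
```

With boto3, a dict like this could be passed as `client("bedrock-runtime").invoke_model(**req)`; the sketch stops short of the network call.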
Is AI the Next Gold Rush? Global Investment Skyrockets 633.33% - TechBullion
From OpenAI's ChatGPT to Google's creation of its own ChatGPT version (Bard), artificial intelligence is set to revolutionise everything in the modern world. How much are corporate investors willing to put at risk with AI technology? And which companies get the lion's share of these investments? To answer these questions, writerbuddy.ai analyzed data collected from CrunchBase, NetBase Quid, S&P Capital IQ, and NFX.
- Information Technology > Services (0.36)
- Banking & Finance (0.31)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (0.84)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (0.31)
Top Large Language Models (LLMs) in 2023 from OpenAI, Google AI, Deepmind, Anthropic, Baidu, Huawei, Meta AI, AI21 Labs, LG AI Research and NVIDIA - MarkTechPost
Large language models are computer programs that can analyze and create text. They are trained on massive amounts of text data, which makes them better at tasks like generating text. Language models are the foundation for many natural language processing (NLP) activities, such as speech-to-text and sentiment analysis. These models can look at a text and predict the next word. Examples of LLMs include ChatGPT, LaMDA, and PaLM. Parameters in LLMs help the model capture relationships in text, which lets it predict the likelihood of word sequences.
- Telecommunications (0.41)
- Information Technology > Hardware (0.40)
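The core idea, predicting the next word from learned statistics over word sequences, can be shown with a toy bigram model: a deliberately tiny stand-in for the billions of parameters in a real LLM (the corpus below is invented for illustration):

```python
from collections import Counter, defaultdict

def train_bigram(text):
    """Count word-pair frequencies: the simplest possible 'parameters'
    for estimating the likelihood of word sequences."""
    words = text.lower().split()
    model = defaultdict(Counter)
    for a, b in zip(words, words[1:]):
        model[a][b] += 1
    return model

def predict_next(model, word):
    """Return the most frequent word observed after `word`, if any."""
    followers = model[word.lower()]
    return followers.most_common(1)[0][0] if followers else None

corpus = "language models predict the next word and language models predict likelihood"
model = train_bigram(corpus)
print(predict_next(model, "models"))  # predict
```

Real LLMs replace these raw counts with learned neural-network weights, but the prediction objective is the same.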
AI21 Labs Bets on Accuracy, Develops Approach for Factual AI - The New Stack
ChatGPT is impressive, but it's missing a vital component. That's according to Ehud Karpas, a squad director at AI21 Labs, which develops generative AI for text. "It does things that are really mind-blowing," Karpas told The New Stack. "I think I should say this: A good text needs to be fluent, and it needs to be engaging. But I don't think that's the whole story."
AI21 Labs Announces The Future Of Writing, Challenging OpenAI
Tel-Aviv-based AI21 Labs today launched Wordtune Spices, a writer-augmentation tool based on generative AI. Selecting from 12 different cues, writers can generate a range of textual options to add to and enhance sentences. Spices can also suggest statistics to strengthen an argument or sharpen a detail. AI21 says Spices is not intended to replace writers but to function as a writing assistant, suggesting additional complete sentences that improve the text being written. It could help refine the main message of the text, bolster and enrich arguments, and add creative expressions such as a joke or an inspirational quote. The Israeli startup claims to have solved one of the major issues with popular applications based on Large Language Models (LLMs), such as OpenAI's ChatGPT, which do not give source credit.
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (1.00)
AI21 Jurassic-1 foundation model is now available on Amazon SageMaker
Today we are excited to announce that AI21 Jurassic-1 (J1) foundation models are available to customers using Amazon SageMaker. Jurassic-1 models are highly versatile, capable of human-like text generation as well as of solving complex tasks such as question answering and text classification, among many others. You can easily try out this model and use it with Amazon SageMaker JumpStart. JumpStart is the machine learning (ML) hub of SageMaker that provides access to foundation models, in addition to built-in algorithms and end-to-end solution templates, to help you get started with ML quickly. In this post, we walk through how to use the Jurassic-1 Grande model in SageMaker.
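Inference against a deployed JumpStart endpoint is a JSON round trip: send a prompt payload, then parse the completion out of the response. A minimal sketch with the endpoint call mocked out; the request and response field names follow an AI21 completion-style schema and are assumptions here:

```python
import json

def build_payload(prompt, max_tokens=50):
    # Request body in an AI21 completion-style schema (field names assumed).
    return json.dumps({"prompt": prompt, "maxTokens": max_tokens})

def parse_completion(response_body):
    # Extract the generated text from an AI21-style completion response.
    data = json.loads(response_body)
    return data["completions"][0]["data"]["text"]

# Mocked response standing in for the bytes a real
# sagemaker-runtime invoke_endpoint call would return.
mock = json.dumps({"completions": [{"data": {"text": " Paris"}}]})
print(parse_completion(mock))
```

In a live setup, the payload would be sent via `boto3.client("sagemaker-runtime").invoke_endpoint(EndpointName=..., Body=...)`, with `parse_completion` applied to the response body.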
Stanford debuts first AI benchmark to help understand LLMs
In the world of artificial intelligence (AI) and machine learning (ML), 2022 has arguably been the year of foundation models: AI models trained at massive scale. From GPT-3 to DALL-E, from BLOOM to Imagen, another day, it seems, brings another large language model (LLM) or text-to-image model. But until now, there have been no AI benchmarks to provide a standardized way of evaluating these models, which have developed at a rapidly accelerating pace over the past couple of years.
Best Practices for Deploying Language Models
Cohere, OpenAI, and AI21 Labs have developed a preliminary set of best practices applicable to any organization developing or deploying large language models. Computers that can read and write are here, and they have the potential to fundamentally impact daily life. The future of human-machine interaction is full of possibility and promise, but any powerful technology needs careful deployment. The joint statement below represents a step towards building a community to address the global challenges presented by AI progress, and we encourage other organizations who would like to participate to get in touch. We're recommending several key principles to help providers of large language models (LLMs) mitigate the risks of this technology in order to achieve its full promise to augment human capabilities.
AI21 Labs raises $64M to help it compete against OpenAI
AI21 Labs has raised $64 million in a funding round to help it compete against OpenAI and other leaders in natural language processing (NLP). Competition in NLP is heating up: OpenAI is currently seen as the industry leader with its GPT-3 model, but rivals are gaining traction, and investors see AI21 Labs as one of the most promising contenders. "We completed this round during a period of market uncertainty, which highlights the confidence our investors have in AI21's vision to change the way people consume and produce information," said Ori Goshen, Co-Founder and Co-CEO of AI21 Labs.
- North America > United States > California (0.06)
- Europe > Netherlands > North Holland > Amsterdam (0.06)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (0.85)